permit opt tokenizer #1958

Merged: 3 commits into mosaicml:dev on Feb 14, 2023
Conversation

@bmosaicml (Contributor) commented on Feb 9, 2023:

What does this PR do?

Add functionality to support in-context learning (ICL) evaluation with the OPT tokenizer (which prepends special tokens each time the tokenizer is called). Because we concatenate the context and continuation encodings into a single input, it's important that no special tokens end up between them.
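For illustration, here is a minimal sketch of the behavior in question. The prompt strings are made up for the example; `facebook/opt-1.3b` is the Hugging Face checkpoint:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")

# The OPT tokenizer prepends its BOS token (</s>, id 2) on every call,
# so tokenizing context and continuation separately and then
# concatenating leaves a stray special token mid-sequence.
ctx = tokenizer("The capital of France is")["input_ids"]
cont = tokenizer(" Paris")["input_ids"]

print(ctx[0], cont[0])  # 2 2 -- both encodings start with BOS
```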

GPT-Neo 1.3B gets the following accuracy:
metrics/piqa/5-shot/InContextLearningMultipleChoiceAccuracy: 0.717934787273407
metrics/lambada/0-shot/InContextLearningLMAccuracy: 0.5883721113204956

OPT 1.3B gets:
metrics/piqa/5-shot/InContextLearningMultipleChoiceAccuracy: 0.719565212726593
metrics/lambada/0-shot/InContextLearningLMAccuracy: 0.5883721113204956

MosaicGPT 1.3B gets:
metrics/piqa/5-shot/InContextLearningMultipleChoiceAccuracy: 0.637499988079071
metrics/lambada/0-shot/InContextLearningLMAccuracy: 0.4075581431388855

What issue(s) does this change relate to?

ICL eval doesn't work with the OPT tokenizer, since it adds special tokens between contexts and continuations.
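One way to sidestep this, shown as a sketch rather than the exact change made in this PR, is to encode the continuation with `add_special_tokens=False` before concatenating:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")

# Sketch only; the fix actually landed in this PR may differ.
# Disabling special tokens on the continuation encoding leaves
# exactly one BOS token, at the start of the combined input.
ctx = tokenizer("The capital of France is")["input_ids"]
cont = tokenizer(" Paris", add_special_tokens=False)["input_ids"]
input_ids = ctx + cont
```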

Before submitting

  • [x] Have you read the contributor guidelines?
  • [x] Is this change a documentation change or typo fix? If so, skip the rest of this checklist.
  • [ ] Was this change discussed/approved in a GitHub issue first? It is much more likely to be merged if so.
  • [x] Did you update any related docs and document your change?
  • [x] Did you update any related tests and add any new tests related to your change? (see testing)
  • [x] Did you run the tests locally to make sure they pass?
  • [x] Did you run pre-commit on your change? (see the pre-commit section of prerequisites)

@bmosaicml marked this pull request as ready for review on February 9, 2023, 20:21
@dakinggg (Contributor) previously approved these changes on Feb 9, 2023:

LGTM, see the one comment

Two review comments on composer/datasets/in_context_learning_evaluation.py (outdated, resolved)
@bmosaicml force-pushed the feature/support_opt branch from 20240ce to f8b834d on February 9, 2023, 22:05
@bmosaicml force-pushed the feature/support_opt branch from f8b834d to 6c44713 on February 9, 2023, 22:23
@abhi-mosaic (Contributor) commented on Feb 10, 2023:

@bmosaicml aren't the OPT scores a bit low? From our internal data, I see:

OPT 1.3B piqa 0-shot (not 5-shot): 0.7236
OPT 1.3B lambada 0-shot: 0.588

Vs:

OPT 1.3B gets:
metrics/piqa/5-shot/InContextLearningMultipleChoiceAccuracy: 0.626630425453186
metrics/lambada/0-shot/InContextLearningLMAccuracy: 0.37538760900497437

@bmosaicml (Contributor, Author) replied:

> @bmosaicml aren't the OPT scores a bit low? From our internal data, I see:
>
> OPT 1.3B piqa 0-shot (not 5-shot): 0.7236
> OPT 1.3B lambada 0-shot: 0.588
>
> Vs:
>
> OPT 1.3B gets:
> metrics/piqa/5-shot/InContextLearningMultipleChoiceAccuracy: 0.626630425453186
> metrics/lambada/0-shot/InContextLearningLMAccuracy: 0.37538760900497437

Yes, they are a bit low... where are you getting your numbers from?

@dakinggg self-requested a review on February 10, 2023, 01:40
@dakinggg dismissed their stale review on February 10, 2023, 01:40:

Blocking until the low numbers are resolved

@bmosaicml (Contributor, Author) replied:

@dakinggg @abhi-mosaic Sorry, the initial numbers I wrote were for OPT 125M. I just retested with OPT 1.3B and the numbers match what we expect!

@dakinggg (Contributor) left a comment:

Fantastic! re-approving :)

@bmosaicml merged commit 1a29fe7 into mosaicml:dev on Feb 14, 2023